Hack Reactor | BOC | Journal Entry 2
Categories: hack reactor BOC
Overview
Creating the ArtistProfile component for the SoundCrate app
Challenge/Motivation
This component will show a user's profile picture, name, a short bio, and will map over the user's tracks and display them below. If a user is looking at their own profile, it will also have an edit button, allowing the user to navigate to the view for updating and editing their profile.
Actions Taken
When the skeleton for this component had been created, it was being rendered directly inside the other components that needed access to it. Because we decided to have the main App component handle all of the conditional rendering for any "main" view (a view that takes up the full page, minus the TopBar and NavBar), I decided to move the component out of the components that were rendering it separately.
Because there was another component being rendered the same way (the Play component, which lets the user listen to a track), I moved them both up to the App component simultaneously.
To do so, I had to create useState variables in the App component for both artistData and songData, as well as the setSongData and setArtistData functions. I also had to create handler functions that would be passed down to the child components (such as any view showing a SongCard where the user can click on an artist name or a song name), allowing them to pass data back up to the state in App when a related link is clicked.
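The lifted-state pattern can be sketched without React at all (a simplified illustration; in the real app these are useState hooks inside App, and the handler names here are my own stand-ins):

```javascript
// Framework-free sketch of the lifted-state pattern.
// Plain variables and setters stand in for React's useState pairs
// so the data flow from child to parent is easy to see.
function createApp() {
  let artistData = null;
  let songData = null;

  const setArtistData = (data) => { artistData = data; };
  const setSongData = (data) => { songData = data; };

  // Handlers passed down to children (e.g. a SongCard), so a click on
  // an artist or song name lifts the clicked data back up into App.
  const handleArtistClick = (artist) => setArtistData(artist);
  const handleSongClick = (song) => setSongData(song);

  return {
    handleArtistClick,
    handleSongClick,
    getState: () => ({ artistData, songData }),
  };
}
```

In React, App would then conditionally render ArtistProfile or Play based on which piece of state was most recently set.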
Results Observed
I believe a structure where the main App component houses most of the necessary data and functions and then passes it down to the child components is the cleanest way to handle the rendering and data of the main views. I'm looking forward to the database-related stuff being finished so we can start to use the actual database data in our components, and I think it will allow us to more easily develop the components we are responsible for.
Hack Reactor | BOC | Journal Entry 1
Categories: hack reactor BOC
Overview
Researching and creating a custom theme with Material UI
Challenge/Motivation
The app our team was tasked with building is SoundCrate, which is a TikTok-esque audio recording and collaboration app. It needs to have cohesive component style and structure based on the wireframe and requirements of the client, and we decided to use Material UI for rapid development.
Actions Taken
I started by doing some research on Material UI and theming best practices, and put together a palette based on our initial Proto.io design. Although our initial palette and styles may change as we iteratively develop the app, having a theme will allow us to do so easily from the top-down, as opposed to going back and changing the CSS in multiple places.
Currently, our custom theme has a primary and a secondary color (each with light, main, and dark variations), a default font and text color, a default and secondary background color, an error color, and two different variations of body text.
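As a rough sketch, the options object we hand to MUI's createTheme has this shape (the color values and font below are placeholders, not our actual palette):

```javascript
// Placeholder values, not our real palette; this shows the shape of
// the options object passed to MUI's createTheme(themeOptions).
const themeOptions = {
  palette: {
    primary:    { light: '#6ec6ff', main: '#2196f3', dark: '#0069c0' },
    secondary:  { light: '#ff79b0', main: '#ff4081', dark: '#c60055' },
    error:      { main: '#d32f2f' },
    background: { default: '#fafafa', paper: '#ffffff' },
    text:       { primary: '#212121' },
  },
  typography: {
    fontFamily: 'Roboto, sans-serif',
    body1: { fontSize: '1rem' },      // default body text
    body2: { fontSize: '0.875rem' },  // secondary body text variation
  },
};
```

The resulting theme can then be supplied app-wide with MUI's ThemeProvider, which is what lets us restyle from the top down later.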
Brian and I also put together some documentation on the types of MUI components we will likely want to use, along with some basic instructions on how to utilize the theme attributes in our inline styling.
I also put together a ThemeExample component so our team can see what the various theme colors and fonts look like in the browser and how to use them in our code:
Results Observed
I've learned a ton about Material UI over the past few sessions, and I think having a theme in place and ready to use will make development more efficient and will allow each team member to focus on the important pieces of their components instead of having to worry too much about CSS and HTML during development.
Hack Reactor | SDC | Journal Entry 11
Categories: hack reactor SDC AWS
Overview
Implementing nginx load-balancing for my node servers by running 4 ec2 instances, each with a copy of my node server, and splitting requests between them based on nginx configuration options.
Challenge/Motivation
Our task / challenge during this sprint was to implement 2 optimizations on our api servers and analyze the results. My second choice for optimization, after caching, was load balancing. My suspicion was that it would not have as great an effect as caching did, but I was hoping it would still get me another few thousand RPS while staying under our SLA of 2000ms and 0.1% errors.
Actions Taken
I created 3 more ec2 instances, installed git, nvm, and npm, cloned down my node api to each of them and installed pm2 for persistent servers. Once I had these 3 other instances up and running (which was relatively simple, since all I had to do was create clones of my first instance and clone down the repo) I went back into my first instance where my nginx proxy was running and added some load-balancing configuration.
log_format upstreamlog '$server_name to: $upstream_addr {$request} '
                       'upstream_response_time $upstream_response_time '
                       'request_time $request_time';

upstream my_http_servers {
    least_conn;
    server 172.31.48.138:3030;
    server 172.31.54.21:3030;
    server 172.31.52.228:3030;
    server 172.31.59.44:3030;
}
I added a custom log format for my load balancing so I could make sure it was working properly, and then added the 4 routes for my API servers into the upstream block (using the private IP addresses from my ec2 instances). I decided to use the least_conn nginx option, which sends each request to the server with the fewest active connections, as this seemed optimal.
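To tie these together, a server block points nginx at the upstream group and attaches the custom log format (a simplified sketch; the listen port and log path here are assumptions, not my exact config):

```nginx
server {
    listen 80;

    # attach the custom format so I can watch requests rotate across instances
    access_log /var/log/nginx/upstream.log upstreamlog;

    location / {
        proxy_pass http://my_http_servers;
    }
}
```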
Results Observed
So far, I was able to get up to 6500 RPS while staying under the SLA, which is nearly a 1000 RPS improvement over just using caching.
I'm betting that I will be able to improve it even more after some research and changing my nginx configuration. Right now it is the error rate that is going higher than 0.1% as I get closer to 7000 RPS - the response time seems to be fine. I'll be doing some analysis this week on what could be causing the error rate to spike and trying to find some tweaks that could help with this.
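Two settings I plan to look at first are nginx's connection and file-descriptor limits, since exhausting them is a common cause of error spikes under heavy load (the numbers below are starting points to experiment with, not tested values):

```nginx
# in the main context of nginx.conf
worker_rlimit_nofile 65535;    # raise the per-worker open-file limit

events {
    worker_connections 10240;  # defaults (often 768 or 1024) are easily hit at high RPS
}
```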
Hack Reactor | SDC | Journal Entry 10
Categories: hack reactor SDC AWS
Overview
Adding nginx caching to node / mongo api servers
Challenge/Motivation
The first optimization I made to my node server was adding caching via nginx, saving API responses to disk and responding with the cached copies. I was confident that caching would significantly increase performance, based on my research and discussions with team members / cohort mates.
Actions Taken
I created a new nginx conf file called cache.conf and added my cache configuration entries:
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=custom_cache:10m inactive=60m;
I set the path on my disk to store the cache, and added "levels", which configures nginx to split cached data across a two-level directory hierarchy, as having a ton of entries in a single directory can make access slower. keys_zone creates a 10MB zone in memory for a copy of the cache keys, which allows nginx to determine whether a request is a hit or a miss without checking the disk. inactive=60m tells nginx to evict responses that haven't been accessed in 60 minutes.
proxy_cache custom_cache;
proxy_cache_revalidate on;
add_header X-Cache-Status $upstream_cache_status;
proxy_pass http://my_http_servers;
proxy_cache activates the cache zone defined at the top of the file. proxy_cache_revalidate tells nginx to refresh expired cached responses with conditional requests, reusing them if they haven't been modified. I added the X-Cache-Status header so I could check responses to see if my cache was working, and then pointed requests at my upstream server group using proxy_pass.
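Assembled, cache.conf looks roughly like this (a sketch showing where each directive lives; the listen port and surrounding server block details are simplified assumptions):

```nginx
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=custom_cache:10m inactive=60m;

server {
    listen 80;

    location / {
        proxy_cache custom_cache;
        proxy_cache_revalidate on;
        add_header X-Cache-Status $upstream_cache_status;
        proxy_pass http://my_http_servers;
    }
}
```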
Results Observed
After warming up my cache, I was able to get around 5700 RPS with my now cache-enabled API server.
My bet is that this optimization will be the best bang for my buck, but I'm hoping to squeeze out a few more RPS with load balancing as well.
Hack Reactor | SDC | Journal Entry 9
Categories: hack reactor SDC AWS
Overview
Stress-testing the deployed app
Challenge/Motivation
Now that the server and database are both deployed and connected on AWS, our next task was to stress-test our routes to the point of failure (pre-optimization), which according to our SLA is a failure rate > 1% or a response time > 2000ms.
Actions Taken
I decided to use loader.io for testing the routes of my deployed app.
- Create a loader.io account
- Download the verification file
- Upload the file to the server and create a GET route specifically for the verification file
- Verify the file
- Create some initial tests: 1 RPS, 100 RPS, 500 RPS, 1000 RPS
- Determine the RPS number that puts the request routes above SLA
Results Observed
For the deployed server, I was able to get up to 725 RPS before going above the 2000ms response time (but did not have any errors), as seen here:
Next steps will be to decide on optimization methods, such as caching, load balancing, payload compression, and other strategies.
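For payload compression specifically, nginx's built-in gzip module is the likely starting point (a hedged sketch; the thresholds and types would need tuning against our actual payloads):

```nginx
# inside the http (or server) context
gzip on;
gzip_types application/json;  # our API responses are JSON
gzip_min_length 1024;         # skip tiny responses where gzip overhead isn't worth it
gzip_comp_level 4;            # middle-ground CPU/size tradeoff
```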